32 research outputs found

    Individual Differences in the Production and Perception of Prosodic Boundaries in American English

    Full text link
    Theoretical interest in the relation between speech production and perception has led to research on whether individual speaker-listeners' production patterns are linked to the information they attend to in perception. However, for prosodic structure, the production-perception relation has received little attention. This dissertation investigates the hypothesis that individual participants vary in their production and perception of prosodic boundaries, and that the properties they use to signal prosodic contrasts are closely related to the properties used to perceive those contrasts. In an acoustic study, 32 native speakers read eight sentence pairs in which the type of prosodic boundary (word and Intonational Phrase boundary) differed. Phrase-final and phrase-initial temporal modulation, pause duration, and pitch reset at the boundaries were analyzed. Results showed that, as a group, speakers lengthened two phrase-final syllables, shortened the post-boundary syllable, and produced a pause and pitch reset when producing an IP boundary. However, individual speakers differed in both the phonetic features they used and the degree to which they used them to distinguish IP from word boundaries. Speakers differed in the onset and scope of phrase-final lengthening and presence of shortening (resulting in six different patterns), pause duration, and the degree of pitch reset at the IP boundary, including in ways that demonstrated a trading relation between these properties for some individuals and an enhancement relation for others. The results suggest that individuals differ in how they encode prosodic structure and offer insights into the complex mechanism of temporal modulation at IP boundaries. In an eye-tracking study that tested the perceptual use of these acoustic properties by 19 of these same participants, the productions of a model talker were manipulated to systematically vary the presence and degree of IP boundary cues. Twelve unique combinations of cues, based on the main patterns in the production study, were created from four phrase-final lengthening patterns, two pause durations (presence/absence of a pause), and three pitch reset values. Patterns of fixation on the target boundary image over time showed that, as a group, listeners attended to the information conveyed by pause duration and final lengthening as that information became available, with pause being the most salient cue for IP boundary perception. A clear pattern did not emerge for pitch reset. Adding to the body of research on the weighting of acoustic properties for IP boundaries, these results characterize the time-course of the perceptual use of different combinations of IP boundary-related properties. To examine the production-perception relation, a series of perceptual models in which each participant's average production values were entered as predictor variables tested whether the production patterns are reflected in the same individuals' perception. The results did not provide statistically significant evidence of a production-perception relation, although a trend in the pause duration models across three different conditions was suggestive of a pattern in which individuals with longer pause durations were faster to fixate on the IP boundary target than those with shorter pause durations.
The lack of evidence of a close production-perception relation for individual speaker-listeners is inconsistent with the main hypothesis but is in line with the results of several previous studies that have investigated this relation for segmental properties. Further investigation is needed to determine whether, despite the absence of a strong production-perception relation, specific individuals might nonetheless show the link predicted by some theoretical approaches.
    PhD, Linguistics, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/162927/1/jiseungk_1.pd
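The modeling step described above (each participant's production values entered as predictors of perception) can be illustrated with a short, hedged sketch. Everything below is hypothetical: the variable names (fixation_latency, mean_pause_dur, participant), the toy data, and the random-intercept structure are illustrative assumptions, not the dissertation's actual model specification.

```python
# Hedged sketch of a perceptual model: a listener's fixation measure predicted
# by that individual's own mean produced pause duration, with random intercepts
# per participant. All names and data here are toy placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_participants, n_trials = 19, 12
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_trials),
    # each participant's mean produced pause duration (s), repeated over trials
    "mean_pause_dur": np.repeat(rng.uniform(0.05, 0.40, n_participants), n_trials),
})
# Toy outcome: time to fixate the IP-boundary target (ms), loosely tied to pause duration.
df["fixation_latency"] = 600 - 300 * df["mean_pause_dur"] + rng.normal(0, 40, len(df))

# Mixed-effects model with a by-participant random intercept.
model = smf.mixedlm("fixation_latency ~ mean_pause_dur", data=df,
                    groups=df["participant"])
print(model.fit().summary())
```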

    Cryptanalysis of FRS Obfuscation based on the CLT13 Multilinear Map

    Get PDF
    We present a classical polynomial-time attack against the FRS branching program obfuscator of Fernando-Rasmussen-Sahai (Asiacrypt'17) (with one zerotest parameter), which is robust against all known classical cryptanalyses of obfuscators, when it is instantiated with the CLT13 multilinear map. The first step is to recover a plaintext modulus of the CLT13 multilinear map. To do so, we apply the Coron and Notarnicola (Asiacrypt'19) algorithm. However, because of parameter issues, the algorithm cannot be used directly. To circumvent this issue, we convert an FRS obfuscator into a new program with a small message space. Through the conversion, we obtain two zerotest parameters and encodings of zero in all but two slots. These are then used to relax the parameter constraints of the message space recovery algorithm. We then propose a cryptanalysis of the FRS obfuscation based on the recovered message space. We show that there exist two functionally equivalent programs whose obfuscations are computationally distinguishable. Thus, the FRS scheme does not satisfy the desired security without additional constraints.

    How to Meet Ternary LWE Keys on Babai's Nearest Plane

    Get PDF
    Cryptographic primitives based on the Learning With Errors (LWE) problem and its variants are promising candidates for efficient quantum-resistant public key cryptosystems. Recent schemes use the LWE problem with a small-norm or sparse secret key for better efficiency. Such constraints, however, invite more tailor-made attacks and thus trade security against efficiency. Improving the algorithms for the LWE problem under these constraints therefore has significant consequences for the concrete security of schemes. In this paper, we present a new hybrid attack on the LWE problem. This new attack combines the primal lattice attack and an improved MitM attack called Meet-LWE, answering an open problem posed by May [Crypto'21]. According to our estimates, the new hybrid attack performs better than previous attacks on LWE problems with a sparse ternary secret key, which play a significant role in the efficiency of fully homomorphic encryption schemes. On the technical side, we generalize the Meet-LWE algorithm to be compatible with Babai's nearest plane algorithm. As a side contribution, we remove the error guessing step in Meet-LWE, resolving another open question.
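Because the generalization is built around Babai's nearest plane algorithm, a minimal NumPy sketch of that standard algorithm is included below for reference. This is the textbook procedure applied to a toy basis, not the paper's Meet-LWE-compatible variant; basis and target values are arbitrary illustrative choices.

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalization of the rows of B (no normalization)."""
    B = np.asarray(B, dtype=float)
    Bstar = np.zeros_like(B)
    for i in range(B.shape[0]):
        Bstar[i] = B[i]
        for j in range(i):
            mu = np.dot(B[i], Bstar[j]) / np.dot(Bstar[j], Bstar[j])
            Bstar[i] -= mu * Bstar[j]
    return Bstar

def babai_nearest_plane(B, t):
    """Return a lattice vector (integer combination of the rows of B)
    close to the target t, via Babai's nearest plane algorithm."""
    B = np.asarray(B, dtype=float)
    Bstar = gram_schmidt(B)
    t = np.asarray(t, dtype=float).copy()
    v = np.zeros_like(t)
    # Walk the Gram-Schmidt vectors from last to first, choosing the
    # nearest hyperplane at each level.
    for i in reversed(range(B.shape[0])):
        c = round(np.dot(t, Bstar[i]) / np.dot(Bstar[i], Bstar[i]))
        t -= c * B[i]
        v += c * B[i]
    return v

# Toy usage: a 2-dimensional lattice and a nearby target point.
B = np.array([[5.0, 0.0], [1.0, 3.0]])
print(babai_nearest_plane(B, np.array([6.3, 2.8])))  # a lattice point near the target
```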

    Improved Universal Thresholdizer from Threshold Fully Homomorphic Encryption

    Get PDF
    The Universal Thresholdizer (CRYPTO'18) is a cryptographic scheme that facilitates the transformation of any cryptosystem into a threshold cryptosystem, making it a versatile tool for threshold cryptography. For instance, when combined with non-threshold schemes, this primitive enables the black-box construction of a one-round threshold signature scheme based on the Learning with Errors problem, as well as a one-round threshold chosen-ciphertext-attack-secure public key encryption. The compiler is constructed in a modular fashion and includes a compact threshold fully homomorphic encryption, a non-interactive zero-knowledge proof with preprocessing, and a non-interactive commitment. An instantiation of the Universal Thresholdizer can be achieved through the construction of a compact threshold fully homomorphic encryption. Currently, there are two threshold fully homomorphic encryption schemes based on linear secret sharing, one using Shamir's secret sharing and the other using the $\{0,1\}$-linear secret sharing scheme ($\{0,1\}$-LSSS). The former fails to achieve compactness, as the size of its ciphertext is $O(N\log N)$, where $N$ is the number of participants in the distributed system. The latter provides compactness, with a ciphertext size of $O(\log N)$, but requires $O(N^{4.3})$ share keys for each party, leading to high communication costs. In this paper, we propose a communication-efficient Universal Thresholdizer by revisiting the threshold fully homomorphic encryption. Our scheme reduces the number of share keys required by each party to $O(N^{2+o(1)})$ while preserving the ciphertext size of $O(\log N)$. To achieve this, we introduce a new linear secret sharing scheme called TreeSSS, which requires a smaller number of shared keys and satisfies compactness. As a result, the threshold fully homomorphic encryption underlying our linear secret sharing scheme has fewer shared keys during the setup algorithm and reduced communication costs during the partial decryption algorithm. Moreover, the construction of a Universal Thresholdizer can be achieved through the use of TreeSSS, as it reduces the number of shared keys compared to previous constructions. Additionally, TreeSSS may be of independent interest, as it improves efficiency in terms of communication costs when used to replace $\{0,1\}$-LSSS.
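For readers unfamiliar with the secret-sharing baseline being compared, the following is a minimal sketch of Shamir's secret sharing over a prime field, i.e. the standard textbook construction; the prime, threshold, and share count are arbitrary illustrative choices, and this is not the paper's TreeSSS.

```python
import random

P = 2**61 - 1  # an illustrative Mersenne prime; any prime larger than the secret works

def share(secret, t, n):
    """Split `secret` into n shares so that any t of them reconstruct it."""
    # Random degree-(t-1) polynomial with constant term = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    # Share for party i is the polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

# Toy usage: 3-out-of-5 sharing of an integer secret.
shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```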

    Adventures in Crypto Dark Matter: Attacks and Fixes for Weak Pseudorandom Functions

    Get PDF
    A weak pseudorandom function (weak PRF) is one of the most important cryptographic primitives owing to its efficiency, although it provides weaker security than a standard PRF. Recently, Boneh et al. (TCC'18) introduced two new weak PRF candidates, called the basic Mod-2/Mod-3 and the alternative Mod-2/Mod-3 weak PRF. Both mix linear computations defined over different small moduli to achieve conceptual simplicity, low complexity (depth-2 $\mathsf{ACC}^0$), and MPC friendliness. The new candidates are conjectured to be exponentially secure against any adversary that is allowed exponentially many samples, and the basic Mod-2/Mod-3 weak PRF is the only candidate that satisfies all of the features above. However, none of the known direct attacks on the basic and alternative Mod-2/Mod-3 weak PRFs exploit their specific structures. In this paper, we investigate these weak PRFs from two perspectives: attacks and fixes. We first propose direct attacks on the alternative Mod-2/Mod-3 weak PRF and on the basic Mod-2/Mod-3 weak PRF when a circulant matrix is used as the secret key. For the alternative Mod-2/Mod-3 weak PRF, we prove that the adversary's advantage is at least $2^{-0.105n}$, where $n$ is the size of the input space of the weak PRF. Similarly, we show that the advantage of our heuristic attack against the weak PRF with a circulant matrix key is larger than $2^{-0.21n}$, contrary to the previous expectation that a 'structured secret key' does not affect the security of a weak PRF. Thus, for the optimistic parameter choice $n = 2\lambda$ with security parameter $\lambda$, parameters should be increased to preserve $\lambda$-bit security when an adversary obtains exponentially many samples. Next, we suggest a simple method for repairing the two weak PRFs affected by our attacks while preserving the parameters.
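To make the object of these attacks concrete, below is a minimal sketch of the basic Mod-2/Mod-3 candidate as it is usually described (a secret binary matrix, a mod-2 linear map, then a mod-3 sum of the output bits), together with the circulant-key variant targeted by the heuristic attack. The toy dimensions are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16   # input length (toy size; real parameters are much larger)
m = 16   # output length of the mod-2 linear map

# Secret key: a random binary matrix. The attacked structured variant instead
# uses a circulant matrix built from a single random row (see circulant below).
A = rng.integers(0, 2, size=(m, n))

def weak_prf(A, x):
    """Basic Mod-2/Mod-3 candidate: y = A x mod 2, output sum(y) mod 3."""
    y = (A @ x) % 2
    return int(y.sum() % 3)

def circulant(row):
    """Build a circulant matrix from its first row (the 'structured key' case)."""
    return np.array([np.roll(row, i) for i in range(len(row))])

x = rng.integers(0, 2, size=n)        # weak PRF setting: inputs are uniformly random
print(weak_prf(A, x))                  # output in {0, 1, 2}
print(weak_prf(circulant(A[0]), x))    # same map evaluated with a circulant key
```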

    Algorithms for CRT-variant of Approximate Greatest Common Divisor Problem

    Get PDF
    The approximate greatest common divisor (ACD) problem and its variants have been used to construct many cryptographic primitives. In particular, variants of the ACD problem based on the Chinese remainder theorem (CRT) are exploited in constructions of batch fully homomorphic encryption that encrypt multiple messages in one ciphertext. Despite the utility of the CRT variant, the algorithms attacking its security foundation have not been studied as well as those for the original ACD-based scheme. In this paper, we propose two algorithms for solving the CCK-ACD problem, which is used to construct a batch fully homomorphic encryption over the integers. To this end, we revisit the orthogonal lattice attack and the simultaneous Diophantine approximation algorithm. Both algorithms take the same time complexity $2^{\tilde{O}(\frac{\gamma}{(\eta-\rho)^2})}$, up to a polynomial factor, to solve the CCK-ACD problem, where $\gamma$, $\eta$, and $\rho$ denote the bit sizes of the samples, the secret primes, and the error bound, respectively. Compared to Chen and Nguyen's algorithm from Eurocrypt'12, which takes $\tilde{O}(2^{\rho/2})$ complexity, our algorithms give the first parameter condition related to the sizes of $\eta$ and $\gamma$. We also report experimental results for our attack on several parameter sets. The results show that our algorithms work well both theoretically and experimentally.
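As background on the underlying problem, the sketch below generates toy approximate-GCD samples of the form x = p*q + r, with small bit sizes standing in for the $\gamma$, $\eta$, and $\rho$ parameters above. The CCK-ACD variant additionally packs several secret primes via the CRT, which this simplified sketch omits; the sizes chosen here are purely illustrative.

```python
import random

# Toy bit sizes standing in for gamma (sample), eta (secret prime), rho (error).
# Real instances are far larger.
eta, rho, gamma = 100, 20, 300

# Secret eta-bit odd number playing the role of the common near-divisor
# (a toy stand-in; primality testing is omitted in this sketch).
p = random.getrandbits(eta) | (1 << (eta - 1)) | 1

def acd_sample(p):
    """One approximate-GCD sample x = p*q + r with a small rho-bit error r."""
    q = random.getrandbits(gamma - eta)
    r = random.getrandbits(rho)
    return p * q + r

samples = [acd_sample(p) for _ in range(10)]
# Every sample is small modulo the secret p (the defining property of ACD).
print(all(x % p < 2**rho for x in samples))
```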

    CascadeHD: Efficient Many-Class Learning Framework Using Hyperdimensional Computing

    No full text
    Brain-inspired hyperdimensional computing (HDC) is gaining attention as a lightweight and extremely parallelizable learning alternative to deep neural networks. Prior research shows the effectiveness of HDC-based learning on less powerful systems such as edge computing devices. However, the many-class classification problem is beyond the focus of mainstream HDC research; existing HDC would not provide sufficient quality and efficiency due to its coarse-grained training. In this paper, we propose an efficient many-class learning framework, called CascadeHD, which identifies latent high-dimensional patterns of many classes holistically while learning a hierarchical inference structure using a novel meta-learning algorithm for high efficiency. Our evaluation conducted on the NVIDIA Jetson device family shows that CascadeHD improves the accuracy for many-class classification by up to 18% while achieving 32% speedup compared to the existing HDC. © 2021 IEEE
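For context on the coarse-grained baseline the paper improves on, here is a minimal sketch of conventional HDC classification: random-projection encoding into bipolar hypervectors, class prototypes formed by bundling (summing), and nearest-prototype inference by cosine similarity. The dimensionality, encoder, and data are generic illustrative choices, not CascadeHD's design.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality (a typical HDC choice)

def encode(x, projection):
    """Encode a real-valued feature vector as a bipolar hypervector
    via a fixed random projection followed by a sign nonlinearity."""
    return np.sign(projection @ x)

def train(X, y, projection, num_classes):
    """Coarse-grained HDC training: bundle (sum) the encodings of each class."""
    prototypes = np.zeros((num_classes, D))
    for xi, yi in zip(X, y):
        prototypes[yi] += encode(xi, projection)
    return prototypes

def predict(x, projection, prototypes):
    """Nearest prototype by cosine similarity."""
    h = encode(x, projection)
    sims = prototypes @ h / (np.linalg.norm(prototypes, axis=1) * np.linalg.norm(h))
    return int(np.argmax(sims))

# Toy usage with random data: 3 classes, 20-dimensional features.
projection = rng.standard_normal((D, 20))
X, y = rng.standard_normal((30, 20)), rng.integers(0, 3, size=30)
prototypes = train(X, y, projection, num_classes=3)
print(predict(X[0], projection, prototypes))
```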

    Stop voicing contrast in American English: Data of individual speakers in trochaic and iambic words in different prosodic structural contexts

    No full text
    The data reported in this article contain the speech production patterns of eleven individual speakers (6 female and 5 male) for word-initial voiced and voiceless stops (/b,d/ and /p,t/) in American English. The production patterns are documented with an acoustic parameter, the Integrated Voicing Index (IVI), obtained from Voice Onset Time (VOT) and the voicing duration in the stop closure (Voicing-in-Closure), in various prosodic contexts: lexically stressed vs. unstressed, accented (focused) vs. unaccented (unfocused), and phrase-initial vs. phrase-medial. The data also contain a CSV file with each speaker's mean values of the IVI, VOT, and Voicing-in-Closure for each prosodic condition for the voiced and voiceless stops, along with information about the speaker's gender. For further discussion of the data, please refer to the full-length article entitled "Prosodic-structural modulation of stop voicing contrast along the VOT continuum in trochaic and iambic words in American English" (Kim et al., 2018).
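A hedged sketch of how such a per-speaker summary file might be inspected is given below; the file name and column names (speaker, stress, accent, position, voicing, IVI, VOT, VoicingInClosure) are assumptions about the layout described above, not the dataset's actual headers.

```python
import pandas as pd

# Hypothetical layout: one row per speaker x prosodic condition x voicing category,
# with that speaker's mean IVI, VOT, and Voicing-in-Closure values.
df = pd.read_csv("stop_voicing_means.csv")  # hypothetical file name

# Compare voiced vs. voiceless stops on each measure within each prosodic condition.
summary = (df.groupby(["stress", "accent", "position", "voicing"])
             [["IVI", "VOT", "VoicingInClosure"]]
             .mean())
print(summary)
```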